On Convergence Rate of Concave-Convex Procedure
Authors
Abstract
The concave-convex procedure (CCCP) has been widely used to solve nonconvex d.c. (difference of convex functions) programs that occur in learning problems, such as sparse support vector machines (SVMs), transductive SVMs, and sparse principal component analysis (PCA). Although the global convergence behavior of CCCP has been well studied, its convergence rate remains an open problem. Most d.c. programs in machine learning involve constraints or a nonsmooth objective function, which prevents convergence analysis via a differentiable map. In this paper, we approach the problem in a different manner by connecting CCCP with the more general block coordinate descent method. We show that the recent convergence result [1] for coordinate gradient descent on nonconvex, nonsmooth problems also applies to exact alternating minimization. This implies that the convergence rate of CCCP is at least linear when the nonsmooth part of the d.c. program is piecewise linear and the smooth part is strictly convex quadratic. Many d.c. programs in the SVM literature fall into this case.
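As a rough illustration of the procedure discussed in the abstract, the sketch below applies the standard CCCP update x_{t+1} = argmin_x u(x) - <grad v(x_t), x> to a toy d.c. objective f(x) = u(x) - v(x) with u(x) = ||x||^4 and v(x) = 2||x||^2. The decomposition, solver choice, and tolerance are illustrative assumptions, not part of the paper; each inner step is the convex surrogate obtained by linearizing the subtracted convex part v at the current iterate.

```python
# Minimal CCCP sketch on a toy d.c. problem (illustrative assumptions only).
import numpy as np
from scipy.optimize import minimize

def u(x):                      # convex part, u(x) = ||x||^4
    return np.sum(x ** 2) ** 2

def grad_v(x):                 # gradient of the subtracted convex part v(x) = 2||x||^2
    return 4.0 * x

def cccp(x0, iters=50, tol=1e-8):
    """Iterate x_{t+1} = argmin_x u(x) - <grad_v(x_t), x>."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_v(x)
        # Each CCCP step solves a convex surrogate: u(x) minus a linear term.
        res = minimize(lambda z: u(z) - g @ z, x, method="BFGS")
        x_new = res.x
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(cccp(np.array([0.3, -0.7])))   # approaches a stationary point of u - v
```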
Similar Papers
On new faster fixed point iterative schemes for contraction operators and comparison of their rate of convergence in convex metric spaces
In this paper we present new iterative algorithms in convex metric spaces. We show that these iterative schemes converge to the fixed point of a single-valued contraction operator. We then compare their rates of convergence. Additionally, numerical examples for these iteration processes are given.
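A minimal numerical sketch of the kind of comparison described above, using the classical Picard and Mann schemes for a simple contraction on the real line; the map T(x) = 0.5*cos(x) and the Mann step size are illustrative choices, not taken from the paper.

```python
# Compare the error of two fixed-point iteration schemes for a toy contraction.
import math

def T(x):                          # a contraction with Lipschitz constant 0.5
    return 0.5 * math.cos(x)

def picard(x, n):
    for _ in range(n):
        x = T(x)                   # x_{k+1} = T(x_k)
    return x

def mann(x, n, alpha=0.5):
    for _ in range(n):
        x = (1 - alpha) * x + alpha * T(x)   # x_{k+1} = (1-a) x_k + a T(x_k)
    return x

x_star = picard(1.0, 200)          # high-accuracy fixed point for reference
for n in (5, 10, 20):
    print(n, abs(picard(1.0, n) - x_star), abs(mann(1.0, n) - x_star))
```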
A Proof of Convergence of the Concave-Convex Procedure Using Zangwill's Theory
The concave-convex procedure (CCCP) is an iterative algorithm that solves d.c. (difference of convex functions) programs as a sequence of convex programs. In machine learning, CCCP is extensively used in many learning algorithms, including sparse support vector machines (SVMs), transductive SVMs, and sparse principal component analysis. Though CCCP is widely used in many applications, its conve...
On the Convergence of the Concave-Convex Procedure
The concave-convex procedure (CCCP) is a majorization-minimization algorithm that solves d.c. (difference of convex functions) programs as a sequence of convex programs. In machine learning, CCCP is extensively used in many learning algorithms like sparse support vector machines (SVMs), transductive SVMs, sparse principal component analysis, etc. Though widely used in many applications, the con...
An Efficient Stochastic Approximation Algorithm for Stochastic Saddle Point Problems
We show that Polyak's (1990) stochastic approximation algorithm with averaging, originally developed for unconstrained minimization of a smooth strongly convex objective function observed with noise, can be naturally modified to solve convex-concave stochastic saddle point problems. We also show that the extended algorithm, considered on general families of stochastic convex-concave saddle point ...
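A toy sketch of stochastic gradient descent-ascent with iterate averaging, in the spirit of the extension described above, applied to the convex-concave function L(x, y) = 0.5*x^2 + x*y - 0.5*y^2 with saddle point at the origin; the objective, noise model, and step sizes are illustrative assumptions rather than the construction analyzed in the paper.

```python
# Stochastic gradient descent-ascent with running iterate averaging (toy example).
import numpy as np

rng = np.random.default_rng(0)
x, y = 2.0, -1.5
x_avg, y_avg = 0.0, 0.0
for t in range(1, 5001):
    step = 1.0 / np.sqrt(t)
    gx = x + y + 0.1 * rng.standard_normal()   # noisy dL/dx
    gy = x - y + 0.1 * rng.standard_normal()   # noisy dL/dy
    x, y = x - step * gx, y + step * gy        # descend in x, ascend in y
    x_avg += (x - x_avg) / t                   # running average of the iterates
    y_avg += (y - y_avg) / t

print("last iterate :", (round(x, 4), round(y, 4)))
print("averaged     :", (round(x_avg, 4), round(y_avg, 4)))  # averaging smooths the noise
```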
Characterizations of $L$-convex spaces
In this paper, the concepts of $L$-concave structures, concave $L$-interior operators and concave $L$-neighborhood systems are introduced. It is shown that the category of $L$-concave spaces and the category of concave $L$-interior spaces are isomorphic, and they are both isomorphic to the category of concave $L$-neighborhood systems whenever $L$ is a completely distributive lattice. Also, it i...